National Repository of Grey Literature : 31 records found, displaying 1 - 10
Acquiring Thesauri from Wikipedia
Novák, Ján ; Schmidt, Marek (referee) ; Otrusina, Lubomír (advisor)
This thesis deals with the automatic acquisition of thesauri from Wikipedia. It describes Wikipedia as a data set suitable for thesaurus acquisition and presents methods for computing the semantic similarity of terms. The thesis also describes the design and implementation of a system for automatic thesaurus acquisition. Finally, the implemented system is evaluated using standard metrics such as precision and recall.
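
The abstract does not state which similarity measure the system uses; a minimal sketch of one common choice for thesaurus acquisition, cosine similarity over term co-occurrence vectors, could look like this (the terms and vectors are toy data, not taken from the thesis):

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Toy vectors of words co-occurring with each term on Wikipedia.
car = Counter({"road": 4, "engine": 3, "drive": 2})
automobile = Counter({"road": 3, "engine": 2, "wheel": 1})
print(cosine_similarity(car, automobile))  # high score -> thesaurus candidates
```

Term pairs scoring above a threshold would then be grouped into thesaurus entries, and the resulting groups compared against a gold standard to obtain the precision and recall mentioned above.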
Wikipedia Page Classification
Suchý, Ondřej ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
The goal of this thesis is to design and implement a system that selects Wikipedia articles relevant to a given topic, in order to reduce the amount of storage taken up by an offline version of the encyclopedia. The problem is solved with information retrieval methods implemented on top of the Elasticsearch search engine. The system determines the user's area of interest from the given keywords and selects articles from that area by measuring the similarity of articles and adding all articles from frequently occurring categories to the selection. The output files for queries over Simple English Wikipedia are usually below 30 MB.
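
A hedged sketch of how such a keyword-driven selection could be issued against Elasticsearch with a more_like_this query (the index name, field names and the elasticsearch-py 8.x client are assumptions, not taken from the thesis):

```python
from elasticsearch import Elasticsearch  # assumes elasticsearch-py 8.x

es = Elasticsearch("http://localhost:9200")

def select_articles(keywords: str, size: int = 100) -> list[str]:
    """Return titles of indexed articles most similar to the keywords."""
    resp = es.search(
        index="simple_wikipedia",  # hypothetical index of article texts
        size=size,
        query={
            "more_like_this": {
                "fields": ["title", "text"],
                "like": keywords,
                "min_term_freq": 1,
                "min_doc_freq": 1,
            }
        },
    )
    return [hit["_source"]["title"] for hit in resp["hits"]["hits"]]

print(select_articles("solar system planets"))
```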
Information Extraction from Wikipedia
Krištof, Tomáš ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
This bachelor's thesis addresses information extraction from unstructured text. The first part summarizes the basic techniques used for information extraction. It then describes the design and implementation of a system for information extraction from Wikipedia. The last part of the thesis analyses the experimental results.
Identifying Entity Types and Attributes Across Languages
Švub, Daniel ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
The aim of this thesis is to analyze articles from the Wikipedia online encyclopedia and to convert their natural-language text into a structured database of persons, places and other entities. The core of the implemented program is determining the type of an entity from its typical characteristics and extracting the entity's most important attributes, in Czech and Slovak. The result is a knowledge base that allows simple searching and sorting of information. Thanks to its easy extensibility, the program can be extended with the identification of further entity types and attributes, as well as with support for other languages.
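
The thesis abstract does not publish its detection rules; purely as an illustration, a type check driven by an article's infobox template might start like this (the template names and the mapping are hypothetical):

```python
import re

# Hypothetical mapping from Czech infobox template names to entity types.
INFOBOX_TYPES = {
    "infobox - osoba": "person",
    "infobox - sídlo": "place",
    "infobox - vodní tok": "place",
}

def entity_type(wikitext: str) -> str | None:
    """Guess an article's entity type from its infobox template, if any."""
    match = re.search(r"\{\{\s*(infobox[^|}]*)", wikitext, re.IGNORECASE)
    if not match:
        return None
    return INFOBOX_TYPES.get(match.group(1).strip().lower())

print(entity_type("{{Infobox - osoba\n| jméno = Jan Novák\n}}"))  # -> person
```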
Methods of Information Extraction
Adamček, Adam ; Smrž, Pavel (referee) ; Kouřil, Jan (advisor)
The goal of information extraction is to retrieve relational data from texts written in natural language. The applications of the extracted information are wide, ranging from text summarization through ontology creation to question answering systems. This work describes the design and implementation of a system, running on a computer cluster, that transforms a dump of Wikipedia articles into a set of extracted facts stored in a distributed RDF database, which can be queried through the created user interface.
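
The abstract does not name the concrete distributed store; the underlying idea, extracted facts kept as subject-predicate-object triples and queried with SPARQL, can be sketched on a single machine with rdflib (the namespace and triples are illustrative):

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/wiki/")  # hypothetical namespace
g = Graph()

# Facts extracted from articles, stored as subject-predicate-object triples.
g.add((EX["Brno"], EX["type"], Literal("city")))
g.add((EX["Brno"], EX["country"], EX["Czech_Republic"]))

# SPARQL query over the extracted facts.
results = g.query("""
    PREFIX ex: <http://example.org/wiki/>
    SELECT ?s WHERE { ?s ex:type "city" . }
""")
for row in results:
    print(row.s)  # -> http://example.org/wiki/Brno
```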
Information Retrieval in Czech Wikipedia
Balgar, Marek ; Bartík, Vladimír (referee) ; Chmelař, Petr (advisor)
The main task of this master's thesis is to understand the problems of information retrieval and text classification. The research focuses on text data, semantic dictionaries and especially knowledge inferred from Wikipedia. The thesis also describes the implementation of a querying system based on this knowledge. Finally, the properties and possible improvements of the system are discussed.
Machine Learning for Natural Language Question Answering
Sasín, Jonáš ; Fajčík, Martin (referee) ; Smrž, Pavel (advisor)
This thesis deals with natural language question answering over the Czech Wikipedia. Question answering systems are growing in popularity, but most of them are developed for English. The main purpose of this work is to explore the available options and datasets and to create such a system for Czech. The thesis focuses on two approaches: one uses the English ALBERT model together with machine translation of passages, the other utilizes multilingual BERT. Several variants of the system are compared, and options for relevant passage retrieval are also discussed. A standard evaluation is provided for every variant of the tested system. The best version has been evaluated on the SQAD v3.0 dataset, reaching 0.44 EM and 0.55 F1, an excellent result compared to other existing systems. The main contribution of this work is the analysis of existing options and the setting of a benchmark for further development of better systems for Czech.
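
The reported EM and F1 follow the usual SQuAD-style definitions; a simplified sketch of how such scores are computed (whitespace tokenization only, without the punctuation and article stripping of the official evaluation script):

```python
from collections import Counter

def tokens(text: str) -> list[str]:
    """Lowercase whitespace tokenization; real scripts normalize more."""
    return text.lower().split()

def exact_match(prediction: str, gold: str) -> float:
    return float(tokens(prediction) == tokens(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred, ref = tokens(prediction), tokens(gold)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Karel Čapek", "Karel Čapek"))          # 1.0
print(f1_score("spisovatel Karel Čapek", "Karel Čapek"))  # 0.8
```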
Identifying Entity Types Based on Information Extraction from Wikipedia
Rusiňák, Petr ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
This thesis presents a system for identifying the entity types of Wikipedia articles (e.g. people or sports events) that can be used to identify any arbitrary entity type. The input to the system is a list of several pages that belong to the entity type and a list of several pages that do not. These lists are used to generate features from which the list of all pages belonging to the entity type is produced. The features can be based both on structured information on Wikipedia, such as templates and categories, and on unstructured information found by analysing the natural-language text of the article's first sentence, where a defining noun representing the article's subject is located. The system supports pages written in Czech and English and can be extended to support other languages.
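
The exact learning method is not given in the abstract; one plausible reading, a classifier trained on the positive and negative page lists using template and category features, can be sketched with scikit-learn (the feature names and pages are invented):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical per-page features: templates and categories found on the page.
pages = [
    {"tmpl:Infobox person": 1, "cat:1900 births": 1},     # positive example
    {"tmpl:Infobox person": 1, "cat:Czech writers": 1},   # positive example
    {"tmpl:Infobox lake": 1, "cat:Lakes of Finland": 1},  # negative example
    {"cat:Football clubs": 1},                            # negative example
]
labels = [1, 1, 0, 0]  # 1 = page belongs to the entity type "person"

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(pages), labels)

# Score an unseen page by its structured features.
unseen = vec.transform([{"tmpl:Infobox person": 1, "cat:1957 births": 1}])
print(clf.predict(unseen))  # -> [1]
```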
Encyclopedia Expert
Krč, Martin ; Schmidt, Marek (referee) ; Smrž, Pavel (advisor)
This project focuses on a system that answers questions formulated in natural language. First, the report discusses problems associated with question answering systems and some commonly employed approaches, with emphasis on shallow methods that do not require many linguistic resources. The second part describes our work on a system that answers factoid questions using the Czech Wikipedia as its source of information. Answer extraction is based partly on specific features of Wikipedia and partly on pre-defined patterns. Results show that for simple questions the system provides a significant improvement over a standard search engine.
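
The pre-defined patterns themselves are not listed in the abstract; a minimal illustration of pattern-based factoid answer extraction (one hypothetical pattern, shown on an English sentence for readability):

```python
import re

# "When was X born?" answered from a Wikipedia-style first sentence
# of the form "X (born DATE) ...".
BORN = re.compile(r"\(born ([^)]+)\)")

def answer_birth_date(first_sentence: str) -> str | None:
    match = BORN.search(first_sentence)
    return match.group(1) if match else None

print(answer_birth_date("Jan Novák (born 4 May 1970) is a Czech writer."))
# -> 4 May 1970
```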
Information Extraction from Wikipedia
Valušek, Ondřej ; Otrusina, Lubomír (referee) ; Smrž, Pavel (advisor)
This thesis deals with the automatic extraction of the types of English Wikipedia articles and of their attributes. Several approaches based on machine learning are presented. Furthermore, important attributes are extracted, such as the date of birth in articles about people or the area in articles about lakes. Using the system presented in this thesis, one can generate a well-structured knowledge base from a file with Wikipedia articles (a so-called dump file) and a small training set containing a few correctly classified articles. Such a knowledge base can then be used for the semantic enrichment of text. During this process a file with so-called definition words is generated; definition words are features extracted by natural-text analysis that can also be used in other ways than in this thesis. There is also a component that can determine which articles were added, deleted or modified between the creation of two different knowledge bases.
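
The change-detection component can be pictured as a comparison of two knowledge-base snapshots keyed by article title; a minimal sketch under that assumption (the knowledge-base layout is invented):

```python
def diff_knowledge_bases(old: dict[str, dict], new: dict[str, dict]):
    """Report articles added, deleted or modified between two snapshots."""
    added = [t for t in new if t not in old]
    deleted = [t for t in old if t not in new]
    modified = [t for t in new if t in old and new[t] != old[t]]
    return added, deleted, modified

kb_v1 = {"Brno": {"type": "city", "country": "CZ"}}
kb_v2 = {"Brno": {"type": "city", "country": "Czechia"},
         "Vltava": {"type": "river"}}
print(diff_knowledge_bases(kb_v1, kb_v2))  # -> (['Vltava'], [], ['Brno'])
```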
